
Oracle Performance Tuning and Optimization
(Publisher: Macmillan Computer Publishing)
Author(s): Edward Whalen
ISBN: 067230886x
Publication Date: 04/01/96



Results of TPC Benchmarks

The current set of TPC results is available to the public from the TPC Web site:

http://www.tpc.org/

The Web site has information on the TPC benchmarks themselves as well as both individual results and the complete results list. Individual results take the form of an Executive Summary, which is usually found in the front section of the Full Disclosure Report (FDR) and contains the performance metrics, a listing of the hardware and software used in the benchmark, and a price breakdown. The Executive Summary also includes a diagram of the benchmarked configuration.

The TPC results spreadsheet is available in SYLK, Adobe PDF, and Excel formats and contains a complete list of the current TPC benchmark results. As benchmarks become obsolete (as was the case for the TPC-A and TPC-B benchmarks), the results are removed from the official list. As new versions of benchmarks become available, there is usually a lag time during which the results from the old versions remain on the spreadsheet (eventually, the old results fall off the list). In some cases—such as for the change from TPC-C version 2.x to 3.0—benchmark results can be upgraded for a limited time.

Interpreting the Spreadsheet

I have refrained from inserting a copy of the spreadsheet in this book because it would have been out of date before the book left the printer. However, I do want to explain how to interpret the spreadsheet. Although the format of the spreadsheet changes from time to time, the concepts remain the same.

The spreadsheet is divided into sections that separate the results for each benchmark and major revision. It usually starts with new results, followed by the individual result sets in alphabetical order. Within each section, the results are sorted alphabetically by sponsor. The following information refers to the System Under Test (SUT) used in the benchmark. To get information about front-end machines, you must look at the Executive Summary for that benchmark. The following list describes the columns in the results summary spreadsheet:

  Company: The sponsor of the benchmark. Typically, a benchmark has more than one sponsor (perhaps a hardware company and an OS company, of which one is considered the primary sponsor).
  System: The name and model number of the benchmarked hardware.
  Spec Revision: The version of the specification under which the benchmark was published. Because minor revisions are comparable, they are listed together.
  Throughput: The primary performance metric, expressing how much work the system completed per unit of time (for TPC-C, transactions per minute, or tpmC).
  Price performance: The other primary metric: the total system cost divided by the throughput (for TPC-C, dollars per tpmC). See the sketch following this list.
  Total system cost: The cost of the complete priced configuration; which components must be included in the price is determined by each benchmark specification.
  Database software: The name and version of the database software used in the benchmark.
  Operating system: The name and version of the operating system used in the benchmark.
  TP monitor: The name and version of the TP monitor used in the benchmark.
  Company: The sponsor of the benchmark. This column is duplicated because the width of the spreadsheet usually forces the table to be broken into two sections for printing.
  System: The name and model number of the benchmarked hardware. Duplicated for purposes of printing the spreadsheet.
  Cluster: Indicates whether the benchmarked system is configured as a cluster or as a single machine.
  MP/UNI: Indicates whether the server is a uniprocessor or multiprocessor computer.
  Qty/Processors/MHz: Describes the number, type, and speed of the CPUs in the system.
  Original received: The date on which the FDR was received at the TPC. At this time, the results can be announced by the sponsor.
  Submitted for review: The date on which 80 additional copies of the FDR were received by the TPC for distribution to the membership.
  Date accepted: The date on which the 60-day challenge period ended. If a challenge is pending, this date is held up.
  Prices updated: Indicates that a pricing change has been made to the FDR.
  Qrt Rpt issue: Indicates the quarterly report of the TPC in which the benchmark results first appeared.
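
To make the relationship among the three primary columns concrete, here is a minimal sketch in Python that organizes a few result records the way the spreadsheet does (one section per benchmark and major revision, sorted alphabetically by sponsor within each section) and derives price/performance as total system cost divided by throughput. The companies, systems, and figures below are invented for illustration; they are not actual TPC results.

    # Minimal sketch: organize results the way the TPC summary spreadsheet
    # does and derive price/performance. All records are invented sample
    # data, not actual TPC results.
    from itertools import groupby

    results = [
        {"company": "Vendor B", "system": "Model 200", "benchmark": "TPC-C",
         "revision": "3.x", "throughput": 5000.0, "total_cost": 1500000.0},
        {"company": "Vendor A", "system": "Model 100", "benchmark": "TPC-C",
         "revision": "3.x", "throughput": 3200.0, "total_cost": 800000.0},
        {"company": "Vendor C", "system": "Model 9", "benchmark": "TPC-C",
         "revision": "2.x", "throughput": 1100.0, "total_cost": 450000.0},
    ]

    # One section per benchmark and major revision, as on the spreadsheet.
    def section_key(r):
        return (r["benchmark"], r["revision"])

    # Sort so sections are grouped together and each section is
    # alphabetical by sponsor.
    results.sort(key=lambda r: (section_key(r), r["company"]))

    for section, rows in groupby(results, key=section_key):
        print("--- %s revision %s ---" % section)
        for r in rows:
            # Price/performance = total system cost / throughput
            # (for TPC-C, dollars per tpmC).
            price_perf = r["total_cost"] / r["throughput"]
            print("%-10s %-10s %7.0f tpmC  $%.2f/tpmC"
                  % (r["company"], r["system"], r["throughput"], price_perf))

For example, a system priced at $1,500,000 that achieves 5,000 tpmC has a price/performance of $300 per tpmC; a cheaper system with lower throughput can still come out ahead on this metric.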

Having served as a representative on the TPC for my employer, I can attest to the hard work and dedication offered by each of the participants. The results of the organization itself stand as a tribute to the people who have worked so hard to make it a success. One thing that has always impressed me is that even though the benchmark subcommittees are made up of people from competing organizations, there is a great deal of cooperation and friendship among the members.

Although industry standard benchmarks cannot predict how your application will perform on a particular platform, the benchmarks are a good indicator of how competing platforms compare in these environments. Hopefully, TPC results can help you narrow down the choices to make your purchasing decision easier. Of course, the best indication of how well a particular application will run on a particular system is to benchmark it yourself. The process of developing a custom benchmark is briefly discussed later in this chapter.

Publication Benchmarks

Several publications have designed and built their own tests for benchmarking RDBMS products. Typically, the publication provides a good description of the test in the article that uses the results (Ziff-Davis labs and PC Week labs both do these types of tests). Many of these tests are very good and can provide insight into specific areas of RDBMS performance, such as loading, backups, and so on.

One thing to look for in these independent tests is the criteria that have been set up for the hardware and RDBMS products. Unlike the TPC benchmarks, these tests have a somewhat different goal in mind. Typically, publication benchmarks target a specific comparison (such as several different RDBMS products running on the same hardware, or several hardware platforms running the same OS and database).

These tests can be useful for comparing systems but may not always show the optimal configuration for each platform. Examine these tests carefully and decide whether they apply to you and whether the tested configuration is realistic for your situation. If so, the results of these tests can be very useful.

